
b-bit Marginal Regression

Slawski, Martin, Li, Ping

Neural Information Processing Systems

We consider the problem of sparse signal recovery from $m$ linear measurements quantized to $b$ bits. $b$-bit Marginal Regression is proposed as the recovery algorithm. We study the question of choosing $b$ given a budget of bits $B = m \cdot b$ and derive a single easy-to-compute expression characterizing the trade-off between $m$ and $b$. The choice $b = 1$ turns out to be optimal for estimating the unit vector corresponding to the signal, for any level of additive Gaussian noise before quantization as well as for adversarial noise. For $b \geq 2$, we show that Lloyd-Max quantization constitutes an optimal quantization scheme and that the norm of the signal can be estimated consistently by maximum likelihood.
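A minimal sketch of the $b = 1$ case described above, assuming Gaussian measurements quantized to their sign. The marginal-regression step correlates each measurement column with the quantized observations and hard-thresholds to the $s$ largest entries; the variable names and the known-sparsity thresholding are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, s = 100, 2000, 5          # signal dimension, measurements, sparsity
x = np.zeros(n)
x[:s] = 1.0
x /= np.linalg.norm(x)          # unit-norm signal: b = 1 recovers the direction only

A = rng.standard_normal((m, n))
y = np.sign(A @ x)              # each measurement quantized to a single bit

# Marginal regression: one matrix-vector product, then keep the top-s entries.
z = A.T @ y / m
support = np.argsort(np.abs(z))[-s:]
x_hat = np.zeros(n)
x_hat[support] = z[support]
x_hat /= np.linalg.norm(x_hat)  # estimate of the unit vector x / ||x||

print(np.dot(x_hat, x))        # close to 1 when the direction is recovered
```

For one-bit measurements the correlations $z_j$ concentrate around a scalar multiple of $x_j$, so thresholding recovers the support once $m$ is large enough relative to $s \log n$.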





Smooth Sparse Coding via Marginal Regression for Learning Sparse Representations

Balasubramanian, Krishnakumar, Yu, Kai, Lebanon, Guy

arXiv.org Machine Learning

We propose and analyze a novel framework for learning sparse representations, based on two statistical techniques: kernel smoothing and marginal regression. The proposed approach provides a flexible framework for incorporating feature similarity or temporal information present in data sets, via non-parametric kernel smoothing. We provide generalization bounds for dictionary learning using smooth sparse coding and show how the sample complexity depends on the $L_1$ norm of the kernel function used. Furthermore, we propose using marginal regression for obtaining sparse codes, which significantly improves the speed and allows one to scale to large dictionary sizes easily. We demonstrate the advantages of the proposed approach, both in terms of accuracy and speed, through extensive experimentation on several real data sets. In addition, we demonstrate how the proposed approach could be used for improving semi-supervised sparse coding.
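The marginal-regression step for obtaining sparse codes can be sketched as follows: given a dictionary with unit-norm atoms, score every atom by its correlation with the signal in a single matrix multiply, then hard-threshold to the $k$ largest scores instead of running an iterative lasso solve. This is a simplified illustration under assumed names and a fixed sparsity level $k$; it omits the kernel-smoothing component of the paper's framework.

```python
import numpy as np

rng = np.random.default_rng(1)

n, K, k = 64, 256, 4            # signal dim, dictionary size, code sparsity
D = rng.standard_normal((n, K))
D /= np.linalg.norm(D, axis=0)  # unit-norm dictionary atoms

# Synthesize a signal as a combination of k atoms.
atoms = rng.choice(K, size=k, replace=False)
coefs = rng.uniform(1.0, 2.0, size=k)
y = D[:, atoms] @ coefs

scores = D.T @ y                          # marginal correlations: one pass over D
keep = np.argsort(np.abs(scores))[-k:]    # hard-threshold to the top k scores
alpha = np.zeros(K)
alpha[keep] = scores[keep]                # sparse code with exactly k nonzeros
```

The speed advantage over convex sparse coding comes from replacing the per-signal optimization with this single multiply-and-threshold, which is what lets the approach scale to large dictionary sizes.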